16S Assessment

I am largely following the DADA2 tutorial by Benjamin Callahan, available at https://benjjneb.github.io/dada2/tutorial.html.

Inspecting the data

Check samples to use

##  [1] "A1"  "A10" "A11" "A12" "A2"  "A3"  "A4"  "A5"  "A6"  "A7"  "A8"  "A9" 
## [13] "B1"  "B10" "B11" "B12" "B2"  "B3"  "B4"  "B5"  "B6"  "B7"  "B8"  "B9" 
## [25] "C1"  "C10" "C11" "C12" "C2"  "C3"  "C4"  "C5"  "C6"  "C7"  "C8"  "C9" 
## [37] "G5"  "G6"



View example quality plots

Viewing the quality profiles of two of the forward samples

Viewing the quality profiles of the same two reverse samples
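These quality profiles can be generated with dada2's plotQualityProfile. A minimal sketch, assuming the demultiplexed fastq files follow the naming pattern shown in the filtering output (e.g. A1_16S_R1.fastq) and sit in a hypothetical data/16S directory:

```r
library(dada2)

path <- "data/16S"  # assumed location of the demultiplexed fastq files
fnFs <- sort(list.files(path, pattern = "_R1.fastq", full.names = TRUE))
fnRs <- sort(list.files(path, pattern = "_R2.fastq", full.names = TRUE))

# Sample names are the text before the first underscore (e.g. "A1")
sample.names <- sapply(strsplit(basename(fnFs), "_"), `[`, 1)

# Quality profiles for the first two forward and reverse samples
plotQualityProfile(fnFs[1:2])
plotQualityProfile(fnRs[1:2])
```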



Filter and trim samples

Perform the filtering

You can see in the figures above that read quality drops off towards the end of the reads, so we need to filter and trim away these low-quality regions.
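The filtering step uses dada2's filterAndTrim. The exact parameters used in this analysis are not shown, so the truncation lengths below are illustrative placeholders that would be chosen from the quality plots:

```r
# Destination paths for the filtered reads
filtFs <- file.path(path, "filtered", paste0(sample.names, "_F_filt.fastq.gz"))
filtRs <- file.path(path, "filtered", paste0(sample.names, "_R_filt.fastq.gz"))

# truncLen values are assumptions -- pick them where quality drops off
out <- filterAndTrim(fnFs, filtFs, fnRs, filtRs,
                     truncLen = c(240, 160),
                     maxN = 0, maxEE = c(2, 2), truncQ = 2,
                     rm.phix = TRUE, compress = TRUE, multithread = TRUE)
head(out)
```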

##                  reads.in reads.out
## A1_16S_R1.fastq     46968     44461
## A10_16S_R1.fastq    25343     23991
## A11_16S_R1.fastq    27398     26006
## A12_16S_R1.fastq    45822     44152
## A2_16S_R1.fastq     99390     94836
## A3_16S_R1.fastq     77568     73174

After filtering out the low-quality reads, we have retained about 95.5% of the original reads.
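The retention figure can be computed directly from the filterAndTrim summary. Doing so for just the six samples printed above gives roughly 95.1%; the ~95.5% reported in the text comes from all 38 samples:

```r
# filterAndTrim summary for the six samples shown above
out <- matrix(c(46968, 44461,
                25343, 23991,
                27398, 26006,
                45822, 44152,
                99390, 94836,
                77568, 73174),
              ncol = 2, byrow = TRUE,
              dimnames = list(NULL, c("reads.in", "reads.out")))

pct <- round(100 * sum(out[, "reads.out"]) / sum(out[, "reads.in"]), 1)
pct  # ~95.1 for these six samples
```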



Check the filtering

Viewing the quality profiles of the same two forward samples post filtering and trimming

Viewing the quality profiles of the same two reverse samples post filtering and trimming



DADA2 sample processing

Error Rates

The error rates for each possible transition (A→C, A→G, …) are shown. Points are the observed error rates for each consensus quality score. The black line shows the estimated error rates after convergence of the machine-learning algorithm. The red line shows the error rates expected under the nominal definition of the Q-score. We want the estimated error rates (black line) to be a good fit to the observed rates (points), and the error rates to drop with increased quality.
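These plots come from dada2's error-learning step. A sketch, assuming filtFs and filtRs hold the filtered file paths from the previous step:

```r
# Learn the error model separately for forward and reverse reads
errF <- learnErrors(filtFs, multithread = TRUE)
errR <- learnErrors(filtRs, multithread = TRUE)

# Observed rates (points) vs estimated fit (black) vs nominal Q-score (red)
plotErrors(errF, nominalQ = TRUE)
```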



Sample Inference

We are now ready to apply the core sample inference algorithm to the filtered and trimmed sequence data.
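A sketch of the core denoising call, assuming the learned error models errF and errR from the previous step:

```r
# Apply the DADA2 sample inference algorithm to each set of reads
dadaFs <- dada(filtFs, err = errF, multithread = TRUE)
dadaRs <- dada(filtRs, err = errR, multithread = TRUE)

dadaFs[[1]]  # inspect the dada-class object for the first forward sample
```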


Forward sample inference

## Sample 1 - 44461 reads in 9515 unique sequences.
## Sample 2 - 23991 reads in 6094 unique sequences.
## Sample 3 - 26006 reads in 6312 unique sequences.
## Sample 4 - 44152 reads in 7218 unique sequences.
## Sample 5 - 94836 reads in 13509 unique sequences.
## Sample 6 - 73174 reads in 9644 unique sequences.
## Sample 7 - 22069 reads in 4917 unique sequences.
## Sample 8 - 76841 reads in 25268 unique sequences.
## Sample 9 - 41897 reads in 7509 unique sequences.
## Sample 10 - 19892 reads in 5018 unique sequences.
## Sample 11 - 48504 reads in 7342 unique sequences.
## Sample 12 - 32048 reads in 7630 unique sequences.
## Sample 13 - 90475 reads in 7748 unique sequences.
## Sample 14 - 24910 reads in 5725 unique sequences.
## Sample 15 - 34782 reads in 5737 unique sequences.
## Sample 16 - 50069 reads in 5167 unique sequences.
## Sample 17 - 76155 reads in 7895 unique sequences.
## Sample 18 - 34374 reads in 5211 unique sequences.
## Sample 19 - 90460 reads in 8319 unique sequences.
## Sample 20 - 23566 reads in 5815 unique sequences.
## Sample 21 - 53651 reads in 5991 unique sequences.
## Sample 22 - 110660 reads in 10438 unique sequences.
## Sample 23 - 14928 reads in 3601 unique sequences.
## Sample 24 - 30116 reads in 6468 unique sequences.
## Sample 25 - 27927 reads in 5388 unique sequences.
## Sample 26 - 73356 reads in 7647 unique sequences.
## Sample 27 - 23594 reads in 6498 unique sequences.
## Sample 28 - 20343 reads in 4504 unique sequences.
## Sample 29 - 25590 reads in 6768 unique sequences.
## Sample 30 - 10144 reads in 3129 unique sequences.
## Sample 31 - 42054 reads in 7901 unique sequences.
## Sample 32 - 25161 reads in 7367 unique sequences.
## Sample 33 - 73293 reads in 8481 unique sequences.
## Sample 34 - 37175 reads in 5029 unique sequences.
## Sample 35 - 30216 reads in 4299 unique sequences.
## Sample 36 - 43608 reads in 5216 unique sequences.
## Sample 37 - 1795 reads in 421 unique sequences.
## Sample 38 - 162 reads in 63 unique sequences.

Inspecting the returned dada-class object for the first forward sample:

## dada-class: object describing DADA2 denoising results
## 144 sequence variants were inferred from 9515 input unique sequences.
## Key parameters: OMEGA_A = 1e-40, OMEGA_C = 1e-40, BAND_SIZE = 16

The DADA2 algorithm inferred 144 true sequence variants from the 9515 unique sequences in the first sample. There is much more to the dada-class return object than this (see help("dada-class") for some info), including multiple diagnostics about the quality of each denoised sequence variant, but that is beyond the scope of an introductory tutorial.


Reverse sample inference

## Sample 1 - 44461 reads in 9175 unique sequences.
## Sample 2 - 23991 reads in 5924 unique sequences.
## Sample 3 - 26006 reads in 6092 unique sequences.
## Sample 4 - 44152 reads in 7108 unique sequences.
## Sample 5 - 94836 reads in 12281 unique sequences.
## Sample 6 - 73174 reads in 8686 unique sequences.
## Sample 7 - 22069 reads in 4797 unique sequences.
## Sample 8 - 76841 reads in 24793 unique sequences.
## Sample 9 - 41897 reads in 7149 unique sequences.
## Sample 10 - 19892 reads in 4939 unique sequences.
## Sample 11 - 48504 reads in 7069 unique sequences.
## Sample 12 - 32048 reads in 7283 unique sequences.
## Sample 13 - 90475 reads in 7578 unique sequences.
## Sample 14 - 24910 reads in 5526 unique sequences.
## Sample 15 - 34782 reads in 5706 unique sequences.
## Sample 16 - 50069 reads in 5134 unique sequences.
## Sample 17 - 76155 reads in 7407 unique sequences.
## Sample 18 - 34374 reads in 4744 unique sequences.
## Sample 19 - 90460 reads in 8211 unique sequences.
## Sample 20 - 23566 reads in 5706 unique sequences.
## Sample 21 - 53651 reads in 5681 unique sequences.
## Sample 22 - 110660 reads in 10194 unique sequences.
## Sample 23 - 14928 reads in 3754 unique sequences.
## Sample 24 - 30116 reads in 6446 unique sequences.
## Sample 25 - 27927 reads in 5233 unique sequences.
## Sample 26 - 73356 reads in 7528 unique sequences.
## Sample 27 - 23594 reads in 6469 unique sequences.
## Sample 28 - 20343 reads in 4401 unique sequences.
## Sample 29 - 25590 reads in 6315 unique sequences.
## Sample 30 - 10144 reads in 3041 unique sequences.
## Sample 31 - 42054 reads in 7772 unique sequences.
## Sample 32 - 25161 reads in 7026 unique sequences.
## Sample 33 - 73293 reads in 8758 unique sequences.
## Sample 34 - 37175 reads in 5235 unique sequences.
## Sample 35 - 30216 reads in 4296 unique sequences.
## Sample 36 - 43608 reads in 5159 unique sequences.
## Sample 37 - 1795 reads in 465 unique sequences.
## Sample 38 - 162 reads in 69 unique sequences.



Merge paired reads

We now merge the forward and reverse reads together to obtain the full denoised sequences. Merging is performed by aligning the denoised forward reads with the reverse-complement of the corresponding denoised reverse reads, and then constructing the merged “contig” sequences. By default, merged sequences are only output if the forward and reverse reads overlap by at least 12 bases, and are identical to each other in the overlap region (but these conditions can be changed via function arguments).

The mergers object is a list of data.frames from each sample. Each data.frame contains the merged sequence, its abundance, and the indices of the forward and reverse sequence variants that were merged. Paired reads that did not exactly overlap were removed by mergePairs, further reducing spurious output.
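The merging step described above corresponds to mergePairs; a sketch, assuming the denoised objects and filtered file paths from the previous steps:

```r
# Merge forward and reverse denoised reads into full-length sequences
mergers <- mergePairs(dadaFs, filtFs, dadaRs, filtRs, verbose = TRUE)
head(mergers[[1]])  # merged sequence, abundance, forward/reverse indices
```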



Construct sequence table

We can now construct an amplicon sequence variant (ASV) table, a higher-resolution version of the OTU table produced by traditional methods.
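Constructing the table and inspecting the ASV length distribution, assuming the mergers object from the previous step:

```r
seqtab <- makeSequenceTable(mergers)
dim(seqtab)                          # samples x ASVs

# Distribution of ASV sequence lengths
table(nchar(getSequences(seqtab)))
```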


## 
##  221  222  223  227  228  239  246  249  250  251  252  253  254  255  256  257 
##    1    2    4    4    1    1    1    1    1   10  105 3259  140    9    5    2

The sequence table is a matrix with rows corresponding to (and named by) the samples, and columns corresponding to (and named by) the sequence variants. This table contains 3546 ASVs.


After viewing the distribution of read lengths, it looks like some fall outside the expected range (244–264 bp), so we will remove these non-target-length sequences.
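Removing the off-target lengths is a simple column subset on the sequence table, since the ASV sequences themselves are the column names; a sketch assuming seqtab from the previous step:

```r
# Keep only ASVs whose sequence length falls within the expected range
seqtab2 <- seqtab[, nchar(colnames(seqtab)) %in% 244:264]
table(nchar(getSequences(seqtab2)))
```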

## 
##  246  249  250  251  252  253  254  255  256  257 
##    1    1    1   10  105 3259  140    9    5    2

This updated table now contains 3533 ASVs.



Remove chimeras

The core dada method corrects substitution and indel errors, but chimeras remain. Fortunately, the accuracy of sequence variants after denoising makes identifying chimeric ASVs simpler than when dealing with fuzzy OTUs. Chimeric sequences are identified if they can be exactly reconstructed by combining a left-segment and a right-segment from two more abundant “parent” sequences.
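Chimera removal uses removeBimeraDenovo; a sketch assuming the length-filtered table seqtab2 from the previous step:

```r
# Identify and remove bimeras by consensus across samples
seqtab.nochim <- removeBimeraDenovo(seqtab2, method = "consensus",
                                    multithread = TRUE, verbose = TRUE)

ncol(seqtab2) - ncol(seqtab.nochim)  # number of bimeras removed
sum(seqtab.nochim) / sum(seqtab2)    # fraction of reads retained
```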

A total of 40 bimeras were identified among the 3533 input sequences, leaving 3493 ASVs and retaining 99.8% of the reads.



Track reads through the pipeline

As a final check of our progress, we can look at the number of reads that made it through each step in the pipeline:
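The tracking table can be assembled as in the DADA2 tutorial; a sketch, assuming the objects created above (out from filterAndTrim, the dada objects, mergers, and the chimera-free table), with the Genotype and Treatment columns joined from sample metadata afterwards:

```r
# Count reads surviving each pipeline step
getN <- function(x) sum(getUniques(x))
track <- cbind(out,
               sapply(dadaFs, getN),
               sapply(dadaRs, getN),
               sapply(mergers, getN),
               rowSums(seqtab.nochim))
colnames(track) <- c("Input", "Filtered", "Denoised Forward",
                     "Denoised Reverse", "Merged", "Nonchimera")
rownames(track) <- sample.names
head(track)
```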

| Genotype | Treatment | Input | Filtered | Denoised Forward | Denoised Reverse | Merged | Nonchimera |
|---|---|---|---|---|---|---|---|
| 15E - 1 | AMB MP+ | 46968 | 44461 | 44392 | 44369 | 44242 | 44230 |
| 6B - 2 | OAW MP+ | 25343 | 23991 | 23882 | 23969 | 23811 | 23811 |
| 5B - 1 | OAW Control | 27398 | 26006 | 25982 | 25986 | 25976 | 25976 |
| 5aE - 1 | AMB MP+ | 45822 | 44152 | 44099 | 44109 | 43973 | 43973 |
| 10aA - 1 | OAW MP+ | 99390 | 94836 | 94653 | 94671 | 94353 | 94205 |
| 5aC - 2 | AMB Control | 77568 | 73174 | 73129 | 73130 | 73019 | 73019 |
| 15E - 2 | OAW Control | 23219 | 22069 | 22048 | 22045 | 22026 | 22026 |
| 15C - 1 | OAW MP+ | 82624 | 76841 | 76349 | 76438 | 73826 | 72033 |
| 10aB - 1 | AMB MP+ | 44719 | 41897 | 41807 | 41866 | 41495 | 41495 |
| 15E - 3 | AMB Control | 21016 | 19892 | 19813 | 19857 | 19742 | 19742 |
| 6D - 1 | AMB Control | 50868 | 48504 | 48363 | 48414 | 48090 | 48044 |
| 5aC - 1 | OAW Control | 33629 | 32048 | 31998 | 31994 | 31891 | 31859 |
| 11E - 2 | OAW MP+ | 93788 | 90475 | 90409 | 90402 | 90155 | 90146 |
| 5aA - 1 | OAW MP+ | 26111 | 24910 | 24846 | 24859 | 24708 | 24683 |
| 5D - 1 | AMB Control | 36048 | 34782 | 34765 | 34754 | 34710 | 34666 |
| 12B - 1 | OAW MP+ | 51802 | 50069 | 49983 | 50013 | 49874 | 49845 |
| 12E - 1 | AMB Control | 79689 | 76155 | 76112 | 76109 | 75821 | 75770 |
| 2E - 1 | AMB MP+ | 36469 | 34374 | 34349 | 34353 | 34289 | 34289 |
| 2B - 1 | AMB Control | 93851 | 90460 | 90421 | 90323 | 90201 | 90201 |
| 6B - 1 | AMB MP+ | 24767 | 23566 | 23506 | 23538 | 23364 | 23328 |
| 2aE - 1 | OAW Control | 56820 | 53651 | 53600 | 53596 | 53342 | 53342 |
| 10aC - 1 | OAW Control | 114866 | 110660 | 110347 | 110599 | 110242 | 110229 |
| 12E - 2 | AMB MP+ | 16088 | 14928 | 14822 | 14864 | 14767 | 14690 |
| 11E - 1 | OAW Control | 31587 | 30116 | 30045 | 30016 | 29943 | 29901 |
| 5C - 1 | AMB MP+ | 29051 | 27927 | 27900 | 27896 | 27851 | 27851 |
| 2D - 1 | OAW MP+ | 75311 | 73356 | 73247 | 73255 | 72971 | 72971 |
| 6E - 1 | OAW Control | 24571 | 23594 | 23583 | 23578 | 23571 | 23571 |
| 2aC - 1 | AMB MP+ | 21209 | 20343 | 20305 | 20329 | 20214 | 20185 |
| 10aD - 1 | AMB Control | 26776 | 25590 | 25551 | 25561 | 25491 | 25491 |
| 11A - 1 | AMB MP+ | 10752 | 10144 | 10095 | 10134 | 10026 | 10026 |
| 5C - 2 | OAW MP+ | 43598 | 42054 | 42021 | 42021 | 41807 | 41723 |
| 2A - 1 | OAW Control | 26542 | 25161 | 25122 | 25096 | 25002 | 25002 |
| 11B - 1 | AMB Control | 76572 | 73293 | 73263 | 73255 | 72994 | 72994 |
| 2aB - 1 | OAW MP+ | 38502 | 37175 | 37138 | 37084 | 36876 | 36876 |
| 12A - 1 | OAW Control | 31488 | 30216 | 30071 | 30042 | 29692 | 29692 |
| 2aD - 1 | AMB Control | 44886 | 43608 | 43583 | 43597 | 43543 | 43525 |
| NA | blank_control | 1918 | 1795 | 1792 | 1792 | 1792 | 1792 |
| NA | blank_control | 169 | 162 | 153 | 158 | 153 | 153 |



Assign taxonomy

It is common at this point, especially in 16S/18S/ITS amplicon sequencing, to assign taxonomy to the sequence variants. The DADA2 package provides a native implementation of the naive Bayesian classifier method for this purpose. The assignTaxonomy function takes as input a set of sequences to be classified and a training set of reference sequences with known taxonomy, and outputs taxonomic assignments with at least minBoot bootstrap confidence.

The dada2 GitHub repository maintains links to the most up-to-date versions of the Silva databases, but I downloaded the training sets from the associated Zenodo records. The versions used here were last updated on 26 July 2022.
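A sketch of the classification step, assuming the chimera-free sequence table from above; the training set filenames below are assumptions that should be matched to the Silva release actually downloaded from Zenodo:

```r
# Assign taxonomy down to genus with the naive Bayesian classifier,
# then add exact-match species assignments
taxa <- assignTaxonomy(seqtab.nochim,
                       "silva_nr99_v138.1_train_set.fa.gz",
                       multithread = TRUE)
taxa <- addSpecies(taxa, "silva_species_assignment_v138.1.fa.gz")

# Inspect assignments without the (long) ASV sequences as row names
taxa.print <- taxa
rownames(taxa.print) <- NULL
head(taxa.print)
```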

Let’s inspect the taxonomic assignments:

##      Kingdom    Phylum             Class                 Order              
## [1,] "Bacteria" "Proteobacteria"   "Alphaproteobacteria" "Rickettsiales"    
## [2,] "Bacteria" "Proteobacteria"   "Gammaproteobacteria" "Pseudomonadales"  
## [3,] "Bacteria" "Cyanobacteria"    "Cyanobacteriia"      "Chloroplast"      
## [4,] "Bacteria" "Myxococcota"      "Myxococcia"          "Myxococcales"     
## [5,] "Bacteria" "Proteobacteria"   "Gammaproteobacteria" "Burkholderiales"  
## [6,] "Bacteria" "Campylobacterota" "Campylobacteria"     "Campylobacterales"
##      Family             Genus         Species
## [1,] "Fokiniaceae"      "MD3-55"      NA     
## [2,] "Pseudomonadaceae" "Pseudomonas" NA     
## [3,] NA                 NA            NA     
## [4,] "Myxococcaceae"    "P3OB-42"     NA     
## [5,] "Alcaligenaceae"   "Alcaligenes" NA     
## [6,] NA                 NA            NA

Great! We can now save this and hand it off to Phyloseq for further analyses.



Remove contaminations

Remove mitochondria, chloroplasts, and non-bacteria
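A sketch using phyloseq's subset_taxa, assuming a phyloseq object ps built from the sequence table, taxonomy, and sample metadata. In the Silva taxonomy, "Chloroplast" is an Order and "Mitochondria" is a Family; the is.na guards keep taxa that are simply unassigned at those ranks:

```r
library(phyloseq)

# Keep only Bacteria, then drop chloroplast and mitochondrial sequences
ps.clean <- subset_taxa(ps, Kingdom == "Bacteria")
ps.clean <- subset_taxa(ps.clean, is.na(Order)  | Order  != "Chloroplast")
ps.clean <- subset_taxa(ps.clean, is.na(Family) | Family != "Mitochondria")
ps.clean
```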

## phyloseq-class experiment-level object
## otu_table()   OTU Table:         [ 3277 taxa and 38 samples ]
## sample_data() Sample Data:       [ 38 samples by 4 sample variables ]
## tax_table()   Taxonomy Table:    [ 3277 taxa by 7 taxonomic ranks ]

Removal of mitochondria, chloroplasts, and non-bacteria taxa reduced the total number of taxa from 3493 to 3277.



Remove neg control contamination

Just 5 out of the 3277 ASVs were classified as contaminants.
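Contaminants can be flagged with the decontam package's prevalence method, which compares ASV prevalence in true samples versus negative controls. A sketch, assuming the cleaned phyloseq object from the previous step and a hypothetical sample_data column (here Sample_or_Control) marking the blanks:

```r
library(decontam)
library(phyloseq)

# "Sample_or_Control" is an assumed metadata variable; use whichever
# column flags the blank controls in your own sample data
sample_data(ps.clean)$is.neg <-
  sample_data(ps.clean)$Sample_or_Control == "blank_control"

contamdf <- isContaminant(ps.clean, method = "prevalence", neg = "is.neg")
table(contamdf$contaminant)

# Drop the contaminant ASVs (the blank samples are removed separately)
ps.decontam <- prune_taxa(!contamdf$contaminant, ps.clean)
```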


## phyloseq-class experiment-level object
## otu_table()   OTU Table:         [ 3272 taxa and 36 samples ]
## sample_data() Sample Data:       [ 36 samples by 5 sample variables ]
## tax_table()   Taxonomy Table:    [ 3272 taxa by 7 taxonomic ranks ]

Looks good! We have now cleaned up the sample data to remove contamination from non-target organisms and from the negative controls.



Additional processing

Blast NCBI

Rarefy



Data visualization


Preliminary figures

Taking phyloseq data and making some preliminary visualizations based on DADA2 tutorial:



Principal component analyses

All microbiome


Core microbiome


Accessory microbiome



Shannon diversity



Simpson diversity



ITS2 Assessment

Abundance by treatment and genotype

Figure SXX. Relative abundance of major ITS2 types by coral fragment (x axis) and treatment (facets). Light green represents Symbiodinium spp. (A3) and dark green represents Breviolum spp. (B2).



Abundance by treatment only

Figure SXX. Relative abundance of major ITS2 types grouped by treatment (n = 4–5 corals per treatment). Light green represents Symbiodinium spp. (A3) and dark green represents Breviolum spp. (B2).



Session Information

All code was written by Colleen B. Bove; feel free to contact her with questions.

Session information from the last run date on 28 July 2022:

## R version 3.6.3 (2020-02-29)
## Platform: x86_64-apple-darwin15.6.0 (64-bit)
## Running under: macOS Catalina 10.15.7
## 
## Matrix products: default
## BLAS:   /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib
## 
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
## 
## attached base packages:
## [1] stats4    parallel  stats     graphics  grDevices utils     datasets 
## [8] methods   base     
## 
## other attached packages:
##  [1] janitor_2.1.0               vegan_2.5-7                
##  [3] lattice_0.20-45             permute_0.9-7              
##  [5] plotly_4.10.0               RColorBrewer_1.1-3         
##  [7] decontam_1.6.0              phyloseq_1.30.0            
##  [9] kableExtra_1.3.4            ShortRead_1.44.3           
## [11] GenomicAlignments_1.22.1    SummarizedExperiment_1.16.1
## [13] DelayedArray_0.12.3         matrixStats_0.62.0         
## [15] Biobase_2.46.0              Rsamtools_2.2.3            
## [17] GenomicRanges_1.38.0        GenomeInfoDb_1.22.1        
## [19] Biostrings_2.54.0           XVector_0.26.0             
## [21] IRanges_2.20.2              S4Vectors_0.24.4           
## [23] BiocParallel_1.20.1         BiocGenerics_0.32.0        
## [25] forcats_0.5.1               stringr_1.4.0              
## [27] dplyr_1.0.8                 purrr_0.3.4                
## [29] readr_2.1.2                 tidyr_1.2.0                
## [31] tibble_3.1.7                ggplot2_3.3.6              
## [33] tidyverse_1.3.1             dada2_1.20.0               
## [35] Rcpp_1.0.9                  knitr_1.33                 
## 
## loaded via a namespace (and not attached):
##  [1] colorspace_2.0-3       deldir_1.0-6           hwriter_1.3.2.1       
##  [4] ellipsis_0.3.2         snakecase_0.11.0       fs_1.5.2              
##  [7] rstudioapi_0.13        farver_2.1.1           fansi_1.0.3           
## [10] lubridate_1.8.0        xml2_1.3.3             splines_3.6.3         
## [13] codetools_0.2-18       cachem_1.0.6           ade4_1.7-18           
## [16] jsonlite_1.7.2         broom_1.0.0            cluster_2.1.2         
## [19] dbplyr_2.1.1           png_0.1-7              compiler_3.6.3        
## [22] httr_1.4.3             backports_1.4.1        lazyeval_0.2.2        
## [25] assertthat_0.2.1       Matrix_1.3-4           fastmap_1.1.0         
## [28] cli_3.3.0              htmltools_0.5.2        tools_3.6.3           
## [31] igraph_1.2.11          gtable_0.3.0           glue_1.6.2            
## [34] GenomeInfoDbData_1.2.2 reshape2_1.4.4         cellranger_1.1.0      
## [37] jquerylib_0.1.4        vctrs_0.4.1            multtest_2.42.0       
## [40] ape_5.6-1              svglite_2.1.0          nlme_3.1-155          
## [43] crosstalk_1.2.0        iterators_1.0.14       xfun_0.29             
## [46] rvest_1.0.1            lifecycle_1.0.1        zlibbioc_1.32.0       
## [49] MASS_7.3-55            scales_1.2.0           ragg_1.1.3            
## [52] hms_1.1.1              biomformat_1.14.0      rhdf5_2.30.1          
## [55] yaml_2.3.5             sass_0.4.0             latticeExtra_0.6-30   
## [58] stringi_1.7.8          highr_0.9              foreach_1.5.2         
## [61] rlang_1.0.4            pkgconfig_2.0.3        systemfonts_1.0.2     
## [64] bitops_1.0-7           evaluate_0.15          Rhdf5lib_1.8.0        
## [67] labeling_0.4.2         htmlwidgets_1.5.4      tidyselect_1.1.1      
## [70] plyr_1.8.7             magrittr_2.0.3         R6_2.5.1              
## [73] generics_0.1.3         DBI_1.1.3              mgcv_1.8-38           
## [76] pillar_1.8.0           haven_2.4.3            withr_2.5.0           
## [79] survival_3.2-13        RCurl_1.98-1.7         modelr_0.1.8          
## [82] crayon_1.5.1           interp_1.0-33          utf8_1.2.2            
## [85] tzdb_0.2.0             rmarkdown_2.13         jpeg_0.1-9            
## [88] grid_3.6.3             readxl_1.3.1           data.table_1.14.2     
## [91] reprex_2.0.1           digest_0.6.29          webshot_0.5.2         
## [94] textshaping_0.3.6      RcppParallel_5.1.5     munsell_0.5.0         
## [97] viridisLite_0.4.0      bslib_0.4.0